Search for: All records where Creators/Authors contains: "Zhexuan Gong"
Note: Clicking on a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site's.

  1. Tensor train decomposition is widely used in machine learning and quantum physics because it represents high-dimensional tensors concisely, overcoming the curse of dimensionality. Cross approximation, originally developed for representing a matrix from a set of selected rows and columns, is an efficient method for constructing a tensor train decomposition of a tensor from a few of its entries. While tensor train cross approximation has achieved remarkable performance in practical applications, its theoretical analysis, in particular regarding the error of the approximation, is so far lacking. To our knowledge, existing results provide only element-wise approximation accuracy guarantees, which lead to a very loose bound when extended to the entire tensor. In this paper, we bridge this gap by providing accuracy guarantees in terms of the entire tensor for both exact and noisy measurements. Our results illustrate how the choice of selected subtensors affects the quality of the cross approximation, and show that the approximation error caused by model error and/or measurement error may not grow exponentially with the order of the tensor. These results are verified by numerical experiments and may have important implications for the usefulness of cross approximation for high-order tensors, such as those encountered in the description of quantum many-body states. A minimal numerical sketch of the matrix-level cross approximation that this method generalizes appears after this list.
  2. It has been recently shown that a state generated by a one-dimensional noisy quantum computer is well approximated by a matrix product operator with a finite bond dimension independent of the number of qubits. We show that full quantum state tomography can be performed for such a state with a minimal number of measurement settings using a method known as tensor train cross approximation. The method works for reconstructing full-rank density matrices and requires only measurements of local operators, which are routinely performed on state-of-the-art experimental quantum platforms. Our method requires exponentially fewer state copies than the best known tomography method for unstructured states using local measurements. The fidelity of our reconstructed state can be further improved via supervised machine learning without demanding more experimental data. Scalable tomography is achieved if the full state can be reconstructed from local reductions. A minimal sketch of the matrix product operator representation underlying this approach appears after this list.
  3. The speed of elementary quantum gates, particularly two-qubit entangling gates, ultimately limits the speed at which quantum circuits can operate. In this work, we experimentally demonstrate two-qubit entangling gates at nearly the fastest possible speed allowed by the physical interaction strength between two superconducting transmon qubits. We achieve this quantum speed limit by implementing experimental gates designed with a machine-learning-inspired optimal control method. Importantly, our method requires only that the single-qubit drive strength be moderately larger than the interaction strength to achieve an arbitrary entangling gate close to its analytical speed limit with high fidelity. The method is therefore applicable to a variety of platforms, including those with comparable single-qubit and two-qubit gate speeds or those with always-on interactions. A minimal sketch of a piecewise-constant two-qubit control evolution and its gate fidelity appears after this list.
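
The abstract in item 1 builds tensor train cross approximation out of the matrix case: representing a matrix from a set of selected rows and columns. The following Python snippet is a minimal, illustrative sketch of that matrix-level cross approximation (a CUR-type skeleton formula with a greedy full-pivot heuristic for choosing rows and columns); the test matrix, rank, and pivot-selection rule are assumptions made for the demonstration and are not the selection strategy analyzed in the paper.

```python
# Minimal sketch of matrix cross approximation, the building block that tensor
# train cross approximation generalizes to higher-order tensors. The greedy
# full-pivot selection below is an illustrative heuristic, not the paper's
# selection strategy.
import numpy as np

rng = np.random.default_rng(0)

# Build an exactly rank-3 test matrix so an exact cross approximation exists.
n, r = 50, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

def greedy_pivots(M, r):
    """Pick r row and column indices by repeatedly taking the largest residual entry."""
    R = M.copy()
    rows, cols = [], []
    for _ in range(r):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        rows.append(i)
        cols.append(j)
        R = R - np.outer(R[:, j], R[i, :]) / R[i, j]  # remove the pivot's rank-1 contribution
    return rows, cols

I, J = greedy_pivots(A, r)

# Cross approximation: A ~= A[:, J] @ inv(A[I, J]) @ A[I, :]
C = A[:, J]               # selected columns
U_sub = A[np.ix_(I, J)]   # intersection submatrix
R_rows = A[I, :]          # selected rows
A_hat = C @ np.linalg.solve(U_sub, R_rows)

print("relative error:", np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```

For an exactly rank-r matrix with a nonsingular intersection submatrix, this reconstruction is exact up to rounding; the paper's contribution concerns how the analogous tensor train construction behaves over the entire tensor under exact and noisy measurements.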
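
Item 2 relies on the target state being well approximated by a matrix product operator (MPO) with a finite bond dimension. The snippet below only illustrates that representation: it contracts an MPO with bond dimension chi into a full density matrix for a few qubits and compares parameter counts. The random placeholder cores and the symmetrization step are assumptions for the demonstration; this is not the paper's cross-approximation tomography or measurement scheme.

```python
# Minimal sketch: contract a matrix product operator (MPO) with bond dimension
# chi into a full density matrix, and compare its parameter count with the full
# 4^n description. Random cores are placeholders, not reconstructed data.
import numpy as np

rng = np.random.default_rng(1)
n_qubits, chi, d = 4, 2, 2   # d = physical dimension of a qubit

# MPO cores of shape (left bond, ket leg, bra leg, right bond); boundary bonds are 1.
cores = []
for k in range(n_qubits):
    cl = 1 if k == 0 else chi
    cr = 1 if k == n_qubits - 1 else chi
    cores.append(rng.standard_normal((cl, d, d, cr)))

# Contract the cores left to right into a (2^n x 2^n) operator.
op = cores[0]
for core in cores[1:]:
    op = np.einsum('aijb,bklc->aikjlc', op, core)      # merge the shared bond index
    a, i1, k1, j1, l1, c = op.shape
    op = op.reshape(a, i1 * k1, j1 * l1, c)            # group ket legs and bra legs
rho = op.reshape(op.shape[1], op.shape[2])

# Force a valid density matrix for the toy example: positive semidefinite, trace one.
rho = rho @ rho.conj().T
rho = rho / np.trace(rho)

mpo_params = sum(core.size for core in cores)
print("MPO parameters:", mpo_params, "vs full density matrix entries:", 4 ** n_qubits)
print("trace:", np.trace(rho).real)
```

The parameter count grows linearly with the number of qubits (roughly n * chi^2 * d^2) instead of exponentially, which is what makes reconstruction from a minimal number of measurement settings plausible.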
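
Item 3 concerns driving two coupled qubits to a target entangling gate as fast as the interaction strength allows. The snippet below only sets up the kind of objective such an optimization would maximize: a piecewise-constant evolution for two qubits with an always-on ZZ coupling plus single-qubit X drives, and a standard gate-fidelity overlap against a CZ target. The Hamiltonian terms, coupling strength, gate time, and CZ target are illustrative assumptions; this is not the experimental model or the machine-learning-inspired optimizer used in the paper.

```python
# Minimal sketch: piecewise-constant two-qubit evolution and gate fidelity against
# a CZ target. The coupling strength, drive model, and gate time are illustrative
# assumptions, not the experimental parameters or the paper's optimizer.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g = 2 * np.pi * 5e6                                 # assumed always-on ZZ coupling (rad/s)
H_int = 0.25 * g * np.kron(sz, sz)                  # interaction term
U_target = np.diag([1, 1, 1, -1]).astype(complex)   # CZ gate

def evolve(amps1, amps2, dt):
    """Piecewise-constant evolution with an X drive on each qubit plus the ZZ coupling."""
    U = np.eye(4, dtype=complex)
    for a1, a2 in zip(amps1, amps2):
        H = H_int + a1 * np.kron(sx, I2) + a2 * np.kron(I2, sx)
        U = expm(-1j * H * dt) @ U
    return U

def gate_fidelity(U, V):
    """Average overlap fidelity |Tr(V^dag U)|^2 / d^2."""
    d = U.shape[0]
    return abs(np.trace(V.conj().T @ U)) ** 2 / d ** 2

# With zero drives, the ZZ coupling alone generates exp(-i*pi/4*ZZ) after T = pi/g,
# which equals CZ up to single-qubit Z rotations and a global phase.
n_steps = 100
T = np.pi / g
dt = T / n_steps
U = evolve(np.zeros(n_steps), np.zeros(n_steps), dt)

Rz = expm(1j * np.pi / 4 * sz)                      # single-qubit Z correction
U_corr = np.exp(-1j * np.pi / 4) * np.kron(Rz, Rz) @ U
print("fidelity vs CZ:", gate_fidelity(U_corr, U_target))
```

An optimizer in the spirit of the paper would instead treat amps1 and amps2 as free parameters and maximize gate_fidelity over them, subject to a bound on the drive strength, to push the gate time toward the limit set by the coupling g.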